Convex Relaxation Methods for Nonconvex Polynomial Optimization Problems
Abstract
This paper introduces the construction of convex relaxations for nonconvex polynomial optimization problems. Branch-and-bound algorithms rely on such convex relaxations. Convex envelopes are of primary importance, since they are the uniformly best convex underestimators of nonconvex polynomials over a given region. The reformulation-linearization technique (RLT) generates linear programming (LP) relaxations of a quadratic problem; the resulting LP yields a lower bound on the global minimum. RLT operates in two steps: a reformulation step and a linearization (or convexification) step. In the reformulation phase, the constraint set (inequality constraints and bounds) is augmented with numerous new constraints formed as pairwise products of the original ones. In the linearization phase, each distinct quadratic term is replaced by a single new RLT variable. This process produces an LP relaxation. Linear matrix inequality (LMI) formulations have been proposed to deal efficiently with nonconvex sets. An LMI is equivalent to a system of polynomial inequalities whose solution set is a convex semialgebraic set. The feasible sets are spectrahedra, which have curved faces, in contrast to the polyhedra of the LP case. Successive LMI relaxations of increasing size can be used to reach the global optimum. Nonlinear inequalities are converted to LMI form using Schur complements. Optimizing a nonconvex polynomial is then equivalent to a linear optimization over a convex set. Engineering applications include system analysis, control theory, combinatorial optimization, statistics, and structural design optimization.
Keywords: convex relaxation; polynomial optimization; nonconvex optimization; LMI formulation; structural optimization
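The two RLT steps described above can be sketched on a small bilinear example. The sketch below is only an illustration under assumptions of this editor, not code from the paper: the objective is to maximize xy over a box with one linear constraint, the reformulation step multiplies the bound factors pairwise, and the linearization step replaces every occurrence of the product xy by a new RLT variable w, giving an LP (solved here with SciPy's linprog, which is assumed available) whose optimum bounds the true optimum.

```python
# Minimal RLT sketch (illustrative example, not the paper's own code):
# maximize x*y  s.t.  0 <= x <= 2, 0 <= y <= 3, x + y <= 4.
# Reformulation: form pairwise products of bound factors, e.g. (2-x)(3-y) >= 0.
# Linearization: replace the quadratic term x*y by the new RLT variable w.
from scipy.optimize import linprog

# Decision variables: [x, y, w], where w stands in for the product x*y.
c = [0.0, 0.0, -1.0]          # minimize -w, i.e. maximize w
A_ub = [
    [0.0, 0.0, -1.0],         # (x-0)(y-0) >= 0  ->  w >= 0
    [3.0, 2.0, -1.0],         # (2-x)(3-y) >= 0  ->  w >= 3x + 2y - 6
    [-3.0, 0.0, 1.0],         # (x-0)(3-y) >= 0  ->  w <= 3x
    [0.0, -2.0, 1.0],         # (2-x)(y-0) >= 0  ->  w <= 2y
    [1.0, 1.0, 0.0],          # original linear constraint x + y <= 4
]
b_ub = [0.0, 6.0, 0.0, 0.0, 4.0]
bounds = [(0.0, 2.0), (0.0, 3.0), (None, None)]  # w is a free RLT variable

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
x, y, w = res.x
print(f"RLT bound on max xy: {w:.2f} at (x, y) = ({x:.2f}, {y:.2f})")
# The LP returns w = 4.8, an upper bound on the true maximum xy = 4
# (attained at x = y = 2); equivalently, -4.8 is a lower bound on the
# global minimum of -xy, as the abstract describes for minimization.
```

The gap between 4.8 and 4 is the relaxation gap a branch-and-bound method would close by splitting the box and re-solving the LP on each piece.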
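The Schur-complement conversion mentioned above can be checked numerically. In this small sketch (an illustration chosen by the editor, not the paper's formulation), the nonlinear inequality t >= x^2 is rewritten as the 2x2 LMI [[1, x], [x, t]] being positive semidefinite, and the equivalence is verified on sample points with NumPy.

```python
# Schur complement sketch: for a symmetric block matrix [[A, B], [B^T, C]]
# with A positive definite, PSD-ness is equivalent to C - B^T A^{-1} B >= 0.
# With A = 1, B = x, C = t, the nonlinear inequality t >= x^2 becomes the
# linear matrix inequality  [[1, x], [x, t]] >= 0 (positive semidefinite).
import numpy as np

def lmi_holds(x, t, tol=1e-9):
    """Check the LMI [[1, x], [x, t]] >= 0 via its smallest eigenvalue."""
    M = np.array([[1.0, x], [x, t]])
    return bool(np.linalg.eigvalsh(M)[0] >= -tol)

def nonlinear_holds(x, t, tol=1e-9):
    """Check the original nonlinear inequality t >= x^2."""
    return bool(t - x * x >= -tol)

# The two conditions agree on a grid of sample points.
for xv in np.linspace(-3, 3, 13):
    for tv in np.linspace(-1, 10, 12):
        assert lmi_holds(xv, tv) == nonlinear_holds(xv, tv)
print("LMI form and nonlinear inequality agree on all samples")
```

Because the LMI is linear in (x, t), a set defined by a nonconvex-looking inequality of this shape can be handed directly to a semidefinite programming solver as a spectrahedral constraint.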
Similar resources
A General Framework for Convex Relaxation of Polynomial Optimization Problems over Cones
The class of POPs (Polynomial Optimization Problems) over cones covers a wide range of optimization problems such as 0-1 integer linear and quadratic programs, nonconvex quadratic programs and bilinear matrix inequalities. This paper presents a new framework for convex relaxation of POPs over cones in terms of linear optimization problems over cones. It provides a unified treatment of many exis...
A Tensor Analogy of Yuan's Theorem of the Alternative and Polynomial Optimization with Sign Structure
Yuan’s theorem of the alternative is an important theoretical tool in optimization, which provides a checkable certificate for the infeasibility of a strict inequality system involving two homogeneous quadratic functions. In this paper, we provide a tractable extension of Yuan’s theorem of the alternative to the symmetric tensor setting. As an application, we establish that the optimal value of...
Approximation algorithms for trilinear optimization with nonconvex constraints and its extensions
In this paper, we study trilinear optimization problems with nonconvex constraints under some assumptions. We first consider the semidefinite relaxation (SDR) of the original problem. Then motivated by So [3], we reduce the problem to that of determining the L2-diameters of certain convex bodies, which can be approximately solved in deterministic polynomial-time. After the relaxed problem being...
Statistical Limits of Convex Relaxations
Many high dimensional sparse learning problems are formulated as nonconvex optimization. A popular approach to solve these nonconvex optimization problems is through convex relaxations such as linear and semidefinite programming. In this paper, we study the statistical limits of convex relaxations. Particularly, we consider two problems: Mean estimation for sparse principal submatrix and edge p...
On Solving Nonconvex Optimization Problems by Reducing The Duality Gap
Lagrangian bounds, i.e. bounds computed by Lagrangian relaxation, have been used successfully in branch-and-bound methods for solving certain classes of nonconvex optimization problems by reducing the duality gap. We discuss this method for the class of partly linear and partly convex optimization problems and, incidentally, point out incorrect results in the recent literature on this sub...